60 research outputs found

    Sustainable Cyberinfrastructure Software Through Open Governance

    The authors discuss their position on open governance, open source software, and sustainability.

    CTSC Recommended Security Practices for Thrift Clients: Case Study - Evernote

    The Science Gateway Platform (SciGaP, scigap.org) will provide services to help communities create Science Gateways. SciGaP (via Apache Airavata) will use the Apache Thrift framework (thrift.apache.org), which uses a language-independent, richly typed interface definition language (IDL) to generate both client and server software development kits (SDKs). Thrift departs from many public services in that it is not a RESTful (http://en.wikipedia.org/wiki/Representational_state_transfer) API. To gain a better understanding of Thrift (for the CTSC-SciGaP engagement), we examine an existing application/service that uses it: Evernote (evernote.com). We hope the design and use cases of Evernote will help inform the design and use cases of SciGaP, at least from a security perspective. This document provides an overview of Evernote with an emphasis on its Cloud API, some examples of its SDKs, and a list of recommended practices for using Evernote.
    National Science Foundation, Grant Number 1234408
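    As a rough illustration of the abstract's point that Thrift clients are not RESTful, the following minimal Python sketch calls Evernote's Thrift-based Cloud API through the official Python SDK; the developer token is a hypothetical placeholder, and sandbox use is assumed.

        # Minimal sketch: Evernote's Cloud API via the official Python SDK
        # (the "evernote" package). The token below is a placeholder; real
        # developer tokens come from sandbox.evernote.com.
        from evernote.api.client import EvernoteClient

        DEV_TOKEN = "your-sandbox-developer-token"  # hypothetical placeholder

        # sandbox=True targets sandbox.evernote.com rather than production.
        client = EvernoteClient(token=DEV_TOKEN, sandbox=True)

        # get_note_store() returns a Thrift-generated NoteStore client stub;
        # calls are serialized with Thrift's binary protocol over HTTPS,
        # not as RESTful HTTP requests.
        note_store = client.get_note_store()
        for notebook in note_store.listNotebooks():
            print(notebook.guid, notebook.name)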

    Authentication and Authorization Considerations for a Multi-tenant Service

    Distributed cyberinfrastructure requires users (and machines) to perform some sort of authentication and authorization (together simply known as "auth"). In the early days of computing, authentication was performed with just a username and password combination, and this is still prevalent today. But during the past several years, we have seen an evolution of approaches and protocols for auth: Kerberos, SSH keys, X.509, OpenID, API keys, OAuth, and more. Not surprisingly, there are trade-offs, both technical and social, for each approach. The NSF Science Gateway communities have had to deal with a variety of auth issues. However, most of the early gateways were rather restrictive in their model of access and development. The practice of using community credentials (certificates), a well-intentioned idea to alleviate restrictive access, still posed a barrier to researchers and challenges for security and auditing. And while the web portal-based gateway clients offered users easy access from a browser, both the interface and the back-end functionality were constrained in the flexibility and extensibility they could provide. Designing a well-defined application programming interface (API) to fine-grained, generic gateway services (on secure, hosted cyberinfrastructure), together with an auth approach that has a lower barrier to entry, will hopefully present a more welcoming environment for both users and developers. This paper provides a review and some thoughts on these topics, with a focus on the role of auth between a Science Gateway and a service provider.
    National Science Foundation, Grant Numbers 1339774 and 1234408
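    To make the trade-offs concrete, here is a minimal, hypothetical sketch of the OAuth 2.0 authorization-code token exchange a gateway might perform; the endpoint URLs, client credentials, and redirect URI are illustrative placeholders, not any specific provider's values.

        # Hypothetical OAuth 2.0 authorization-code exchange: after the user
        # approves access in a browser, the gateway swaps the short-lived
        # authorization code for an access token.
        import requests

        TOKEN_URL = "https://auth.example.org/oauth2/token"  # placeholder

        resp = requests.post(
            TOKEN_URL,
            data={
                "grant_type": "authorization_code",
                "code": "code-returned-to-the-redirect-uri",
                "redirect_uri": "https://gateway.example.org/callback",
                "client_id": "example-gateway",
                "client_secret": "example-secret",
            },
            timeout=30,
        )
        resp.raise_for_status()
        token = resp.json()["access_token"]

        # The gateway then calls the service provider's API on the user's
        # behalf, without ever handling the user's password.
        jobs = requests.get(
            "https://api.example.org/v1/jobs",  # placeholder endpoint
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )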

    A Credential Store for Multi-tenant Science Gateways

    Science Gateways bridge multiple computational grids and clouds, acting as overlay cyberinfrastructure. Gateways have three logical tiers: a user interfacing tier, a resource tier, and a bridging middleware tier. Different groups may operate these tiers. This introduces three security challenges. First, the gateway middleware must manage multiple types of credentials associated with different resource providers. Second, the separation of the user interface and middleware layers means that security credentials must be securely delegated from the user interface to the middleware. Third, the same middleware may serve multiple gateways, so the middleware must correctly isolate user credentials associated with different gateways. We examine each of these three scenarios, concentrating on the requirements and implementation of the middleware layer. We propose and investigate the use of a Credential Store to solve the three security challenges.
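    The third challenge, tenant isolation, can be pictured with a small sketch (this is hypothetical, not Airavata's actual Credential Store API): credentials are keyed by gateway as well as by user, so one gateway can never resolve secrets stored by another.

        # Minimal isolation sketch (hypothetical, not the real Credential
        # Store interface). Secrets are keyed by (gateway, user, resource),
        # so lookups only succeed within the calling gateway's namespace.
        from dataclasses import dataclass, field

        @dataclass
        class CredentialStore:
            _store: dict = field(default_factory=dict)

            def put(self, gateway_id: str, user_id: str,
                    resource: str, secret: bytes) -> None:
                # A real store would encrypt the secret at rest.
                self._store[(gateway_id, user_id, resource)] = secret

            def get(self, gateway_id: str, user_id: str,
                    resource: str) -> bytes:
                # KeyError enforces isolation: gateway A cannot read a
                # credential stored under gateway B, even for the same user.
                return self._store[(gateway_id, user_id, resource)]

        store = CredentialStore()
        store.put("gateway-a", "alice", "cluster.example.org", b"ssh-key-bytes")
        store.get("gateway-a", "alice", "cluster.example.org")    # ok
        # store.get("gateway-b", "alice", "cluster.example.org")  # KeyError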

    The CSBG-LSU Gateway: Web-based hosted gateway for computational system biology application tools from Louisiana State University

    Science gateways are identified as an effective way to publish and distribute software for research communities without the burden of learning HPC (High Performance Computing) systems. In the past, researchers were expected to have in-depth knowledge of HPC systems, along with their respective science fields, in order to do effective research. Science gateways eliminate the need to learn HPC systems and allow research communities to focus more on their science, letting the gateway handle communication with the HPC systems. In this poster we present the science gateway project of the CSBG (Computational System Biology Group, www.brylinski.org) in the Department of Biological Sciences, together with the Center for Computation & Technology at LSU (Louisiana State University). The gateway project was initiated in order to provide CSBG software tools as a service through a science gateway.

    UltraScan Solution Modeler: integrated hydrodynamic parameter and small angle scattering computation and fitting tools

    This is a preprint of a paper in the proceedings of the XSEDE12 conference, held July 16-19, 2012 in Chicago, IL. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

    UltraScan Solution Modeler (US-SOMO) processes atomic and lower-resolution bead model representations of biological and other macromolecules to compute various hydrodynamic parameters, such as the sedimentation and diffusion coefficients, relaxation times, and intrinsic viscosity, as well as small angle scattering curves, that contribute to our understanding of molecular structure in solution. Knowledge of biological macromolecules' structure aids researchers in understanding their function as a path to disease prevention and therapeutics for conditions such as cancer, thrombosis, Alzheimer's disease, and others. US-SOMO provides a convergence of experimental, computational, and modeling techniques, in which detailed molecular structure and properties are determined from data obtained by a range of experimental techniques that, by themselves, give incomplete information. Our goal in this work is to develop the infrastructure and user interfaces that will enable a wide range of scientists to carry out complicated experimental data analysis techniques on XSEDE. Our user community predominantly consists of biophysics and structural biology researchers. A recent search on PubMed reports 9,205 papers in the past decade referencing the techniques we support. We believe our software will provide these researchers a convenient and unique framework to refine structures, thus advancing their research.

    The computed hydrodynamic parameters and scattering curves are screened against experimental data, effectively pruning potential structures into equivalence classes. Experimental methods may include analytical ultracentrifugation, dynamic light scattering, small angle X-ray and neutron scattering, NMR, fluorescence spectroscopy, and others. One source of macromolecular models is X-ray crystallography; however, the conformation in solution may not match that observed in the crystal form. Using computational techniques, an initial fixed model can be expanded into a search space using high-temperature molecular dynamics approaches or stochastic methods such as Brownian dynamics. The number of structures produced can vary greatly, ranging from hundreds to tens of thousands or more.

    This introduces a number of cyberinfrastructure challenges. Computing hydrodynamic parameters and small angle scattering curves can be computationally intensive for each structure, so cluster compute resources are essential for timely results. Input and output data sizes can vary greatly, from less than 1 MB to 2 GB or more. Although the parallelization is trivial, alongside the data size variability there is a large range of compute sizes, from one to potentially thousands of cores, with compute times of minutes to hours. In addition to the distributed computing infrastructure challenges, an important concern was how to allow a user to conveniently submit, monitor, and retrieve results from within the C++/Qt GUI application while maintaining a method for authentication, approval, and registered publication usage throttling. Middleware supporting these design goals has been integrated into the application with assistance from the Open Gateway Computing Environments (OGCE) collaboration team. The approach was tested on various XSEDE clusters and local compute resources. This paper reviews current US-SOMO functionality and implementation with a focus on the newly deployed cluster integration.
    This work was supported by NIH grant K25GM090154 to EB, NSF grant OCI-1032742 to MP, NSF grant TG-MCB070040N to BD, and NIH grant RR-022200 to B
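    Since the screening step is trivially parallel, a rough sketch of the pattern looks like the following; compute_hydrodynamics() and the tolerance test are illustrative stand-ins, not US-SOMO's actual interfaces.

        # Hypothetical sketch of the trivially parallel screening step:
        # each candidate structure is processed independently, then kept
        # or pruned against experimental targets.
        from concurrent.futures import ProcessPoolExecutor

        def compute_hydrodynamics(structure_path: str) -> dict:
            # Stand-in for the per-structure computation (sedimentation
            # coefficient, diffusion coefficient, intrinsic viscosity, ...).
            return {"s20w": 4.1}  # placeholder result

        def within_tolerance(computed: dict, experimental: dict,
                             tol: float = 0.05) -> bool:
            return all(abs(computed[k] - v) / v <= tol
                       for k, v in experimental.items())

        structures = [f"model_{i:05d}.pdb" for i in range(10_000)]
        experimental = {"s20w": 4.2}  # placeholder experimental value

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                results = pool.map(compute_hydrodynamics, structures)
            kept = [s for s, r in zip(structures, results)
                    if within_tolerance(r, experimental)]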

    Science Gateway Operational Sustainability: Adopting a Platform-as-a-Service Approach

    The authors discuss their position on operational sustainability for web-based science gateways.

    A High Throughput Workflow Environment for Cosmological Simulations

    The next generation of wide-area sky surveys offers the power to place extremely precise constraints on cosmological parameters and to test the source of cosmic acceleration. These observational programs will employ multiple techniques based on a variety of statistical signatures of galaxies and large-scale structure. These techniques have sources of systematic error that need to be understood at the percent level in order to fully leverage the power of next-generation catalogs. Simulations of large-scale structure provide the means to characterize these uncertainties. We are using XSEDE resources to produce multiple synthetic sky surveys of galaxies and large-scale structure in support of science analysis for the Dark Energy Survey. In order to scale up our production to the level of fifty 10^10-particle simulations, we are working to embed production control within the Apache Airavata workflow environment. We explain our methods and report how the workflow has reduced production time by 40% compared to manual management.
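    The production-control idea can be sketched generically: enumerate the fifty simulation configurations and hand each to the workflow system. The submit_experiment() call below is a hypothetical stand-in, not the actual Apache Airavata API.

        # Hypothetical sketch of scripted production control for fifty
        # 10^10-particle runs. submit_experiment() stands in for whatever
        # the workflow environment exposes; it is not a real API call.
        def submit_experiment(name: str, params: dict) -> str:
            print(f"submitting {name}: {params}")
            return name  # placeholder experiment id

        N_PARTICLES = 10 ** 10
        experiment_ids = []
        for run in range(50):
            params = {
                "n_particles": N_PARTICLES,
                "random_seed": 1000 + run,  # vary initial conditions per run
                "output_prefix": f"des_mock_{run:02d}",
            }
            experiment_ids.append(
                submit_experiment(f"sky-survey-run-{run:02d}", params))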